KLRfome - Kernel Logistic Regression on Focal Mean Embeddings

Authors

Abstract


Similar resources

Sparse Bayesian kernel logistic regression

In this paper we present a simple hierarchical Bayesian treatment of the sparse kernel logistic regression (KLR) model based on MacKay's evidence approximation. The model is re-parameterised such that an isotropic Gaussian prior over parameters in the kernel-induced feature space is replaced by an isotropic Gaussian prior over the transformed parameters, facilitating a Bayesian analysis using stan...
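For orientation, the sketch below fits a plain kernel logistic regression with a Gaussian (L2) prior on the dual coefficients, which is the non-Bayesian core of the model described above; the RBF kernel, the penalty strength `lam`, and the L-BFGS optimiser are illustrative assumptions and do not reproduce the paper's evidence-approximation procedure.

```python
# Minimal kernel logistic regression (KLR) with a Gaussian/L2 prior.
# Illustrative sketch only: the RBF kernel, lam, and L-BFGS choices are
# assumptions, not the Bayesian evidence approximation of the paper.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_klr(X, y, lam=1e-2, gamma=1.0):
    """Fit dual coefficients alpha for f(x) = sum_i alpha_i k(x, x_i)."""
    K = rbf_kernel(X, X, gamma)

    def objective(alpha):
        f = K @ alpha
        # Negative log-likelihood of the logistic model (y in {0, 1}).
        nll = np.sum(np.logaddexp(0.0, f) - y * f)
        # Gaussian prior over the function <=> (lam / 2) * alpha' K alpha.
        penalty = 0.5 * lam * alpha @ K @ alpha
        return nll + penalty

    res = minimize(objective, np.zeros(len(y)), method="L-BFGS-B")
    return res.x

def predict_proba(alpha, X_train, X_new, gamma=1.0):
    return 1.0 / (1.0 + np.exp(-rbf_kernel(X_new, X_train, gamma) @ alpha))

# Toy usage with synthetic data (an assumption, not data from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
alpha = fit_klr(X, y)
print(predict_proba(alpha, X, X[:5]))
```

The quadratic penalty corresponds to a zero-mean isotropic Gaussian prior on the function in the kernel-induced feature space, which is the prior the abstract's re-parameterisation starts from.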

Minimax Estimation of Kernel Mean Embeddings

In this paper, we study the minimax estimation of the Bochner integral μ_k(P) := ∫_X k(·, x) dP(x), also known as the kernel mean embedding, ...
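For context, the quantity being estimated is approximated in practice by the empirical mean embedding μ̂_k = (1/n) Σ_i k(·, x_i). The sketch below evaluates such an embedding and the induced MMD distance with an RBF kernel; the kernel choice and the toy Gaussian samples are assumptions for illustration, not part of the paper's minimax analysis.

```python
# Empirical kernel mean embedding: mu_hat = (1/n) * sum_i k(., x_i).
# The RBF kernel and the toy samples below are illustrative assumptions.
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def embed(sample, gamma=0.5):
    """Return a callable evaluating the empirical mean embedding at a point."""
    return lambda t: np.mean([rbf(t, x, gamma) for x in sample])

def mmd2(sample_p, sample_q, gamma=0.5):
    """Squared MMD ||mu_P - mu_Q||^2 via pairwise kernel averages (biased V-statistic)."""
    kpp = np.mean([[rbf(a, b, gamma) for b in sample_p] for a in sample_p])
    kqq = np.mean([[rbf(a, b, gamma) for b in sample_q] for a in sample_q])
    kpq = np.mean([[rbf(a, b, gamma) for b in sample_q] for a in sample_p])
    return kpp + kqq - 2.0 * kpq

# Toy data: two one-dimensional Gaussians with shifted means (an assumption).
rng = np.random.default_rng(1)
P = rng.normal(0.0, 1.0, size=(100, 1))
Q = rng.normal(0.5, 1.0, size=(100, 1))
mu_P = embed(P)
print(mu_P(np.array([0.0])))   # empirical embedding evaluated at t = 0
print(mmd2(P, Q))              # squared distance between the two embeddings
```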

Twitter Author Profiling Using Word Embeddings and Logistic Regression

The general goal of the author profiling task is to determine various social and demographic aspects of the author based on his pieces of writing. In this work, we propose an approach that combines word embeddings and classical logistic regression for identifying author gender and language variety based on the corresponding tweets. The model was trained on PAN 2017 Twitter Corpus that contains ...
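As a rough illustration of the pipeline the abstract describes (average the word embeddings of each tweet, then train a linear logistic regression classifier), here is a hedged sketch; the random vectors standing in for pretrained embeddings, the toy tweets, and the scikit-learn classifier are assumptions, not the PAN 2017 setup.

```python
# Averaged word embeddings fed to a plain logistic regression classifier.
# The random "embeddings" and toy tweets are stand-ins (assumptions) for
# pretrained vectors and the PAN 2017 Twitter corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
DIM = 50
vocab = ["great", "match", "love", "this", "recipe", "goal", "team", "cake"]
embeddings = {w: rng.normal(size=DIM) for w in vocab}  # stand-in for word2vec/GloVe

def tweet_vector(tweet):
    """Average the embeddings of known tokens; zero vector if none are known."""
    vecs = [embeddings[t] for t in tweet.lower().split() if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

tweets = ["great match great team", "love this cake recipe",
          "goal goal great team", "love this recipe"]
labels = [0, 1, 0, 1]  # e.g. two author classes

X = np.vstack([tweet_vector(t) for t in tweets])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([tweet_vector("great goal")]))
```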

Efficient Online Learning for Large-Scale Sparse Kernel Logistic Regression

In this paper, we study the problem of large-scale Kernel Logistic Regression (KLR). A straightforward approach is to apply stochastic approximation to KLR. We refer to this approach as non-conservative online learning algorithm because it updates the kernel classifier after every received training example, leading to a dense classifier. To improve the sparsity of the KLR classifier, we propose...
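To make the contrast concrete, the sketch below implements a non-conservative online KLR update (every incoming example can become a support point) and then prunes small coefficients to recover sparsity; the learning rate, shrinkage, pruning threshold, and RBF kernel are illustrative assumptions rather than the algorithm proposed in the paper.

```python
# Online kernel logistic regression: each incoming example may become a
# support point (non-conservative update); small coefficients are pruned
# to keep the classifier sparse. The pruning rule, step size and RBF
# kernel below are illustrative assumptions, not the paper's method.
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

class OnlineSparseKLR:
    def __init__(self, eta=0.5, lam=0.1, prune_tol=1e-3, gamma=1.0):
        self.eta, self.lam, self.prune_tol, self.gamma = eta, lam, prune_tol, gamma
        self.support, self.alpha = [], []

    def decision(self, x):
        return sum(a * rbf(x, s, self.gamma) for a, s in zip(self.alpha, self.support))

    def partial_fit(self, x, y):  # y in {-1, +1}
        margin = y * self.decision(x)
        grad_coef = -y / (1.0 + np.exp(margin))   # d/df of log(1 + exp(-y f))
        # Regularisation step shrinks all existing coefficients.
        self.alpha = [(1.0 - self.eta * self.lam) * a for a in self.alpha]
        # Non-conservative step: the current example becomes a support point.
        self.support.append(x)
        self.alpha.append(-self.eta * grad_coef)
        # Sparsification: drop support points with negligible coefficients.
        keep = [i for i, a in enumerate(self.alpha) if abs(a) > self.prune_tol]
        self.support = [self.support[i] for i in keep]
        self.alpha = [self.alpha[i] for i in keep]

# Toy stream of examples (an assumption).
rng = np.random.default_rng(3)
model = OnlineSparseKLR()
for _ in range(300):
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    model.partial_fit(x, y)
print(len(model.support), np.sign(model.decision(np.array([1.0, 1.0]))))
```

After 300 examples the retained support set is noticeably smaller than the number of updates, which is the kind of sparsity the abstract is concerned with.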

Nonparametric Logistic Regression: Reproducing Kernel Hilbert Spaces and Strong Convexity

We study maximum penalized likelihood estimation for logistic regression type problems. The usual difficulties encountered when the log-odds ratios may become large in absolute value are circumvented by imposing a priori bounds on the estimator, depending on the sample size (n) and smoothing parameter. We pay for this in the convergence rate of the mean integrated squared error by a factor log n...
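For readers new to this setting, the penalized maximum likelihood problem studied in this line of work can be written as follows; the notation below is an assumed, standard formulation rather than the paper's own.

```latex
% Penalized (negative) log-likelihood for nonparametric logistic regression
% over an RKHS H with kernel k; standard formulation, notation assumed.
\[
  \hat{f}_n \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H},\ \|f\|_\infty \le B_n}
  \;\frac{1}{n} \sum_{i=1}^{n} \Bigl[ \log\bigl(1 + e^{f(x_i)}\bigr) - y_i f(x_i) \Bigr]
  \;+\; \lambda_n \, \|f\|_{\mathcal{H}}^{2},
\]
% where f(x) is the log-odds ratio, B_n is the a priori bound on the estimator,
% and lambda_n is the smoothing parameter.
```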


Journal

Journal title: Journal of Open Source Software

Year: 2019

ISSN: 2475-9066

DOI: 10.21105/joss.00722